Hi All,

We have a NetApp cluster with the following nodes:

OLDNode-01
OLDNode-02
OLDNode-03
OLDNode-04
NEWNode-05
NEWNode-06

NEWNode-05 and -06 are our new controllers, and I believe I have finished migrating everything from OLDNode-01 through -04 over to them. We now want to remove nodes 01 to 04 from the cluster completely.

Is there a nice, easy way to do this? Is it non-disruptive? Are there any final checks I need to do? Is ONTAP smart enough to know if something could go wrong and tell us beforehand?

I'm just wondering what your experiences have been, and whether there is anything we need to know to make sure this runs as smoothly as possible with no (or minimal) disruption.
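For context, here is the rough pre-check and removal sequence I've sketched out so far. I'm assuming a recent ONTAP 9 release and I haven't validated the exact commands or privilege levels against our version (older releases use "cluster unjoin" at advanced privilege instead of "cluster remove-node"), so please treat this as a sketch and correct me where I'm wrong:

set -privilege advanced                                  (some of the commands below need it)
cluster show -fields health,eligibility,epsilon          (all nodes healthy and eligible; note which node holds epsilon)
storage aggregate show                                   (no data aggregates still owned by OLDNode-01 to -04)
volume show                                              (no data volumes left on the old nodes' aggregates)
network interface show -fields home-node,curr-node       (no data LIFs homed on, or currently hosted by, the old nodes)
storage failover show                                    (check the HA state of the old pairs before touching them)
cluster modify -node NEWNode-05 -epsilon true            (only if one of the old nodes currently holds epsilon)
cluster remove-node -node OLDNode-04                     (one node at a time; older releases use "cluster unjoin -node ..." instead)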
Hello,

I am hitting an installation hang while installing ONTAP Mediator 1.9.1.

How I'm installing: I extracted the installer archive to /opt/ontap/ and, in /opt/ontap/ontap-mediator-1.9.1, ran:

./ontap-mediator-1.9.1

Last part of the installation log:
[2025-06-26 12:37:53]
[2025-06-26 12:37:53] If you want SCST to start automatically at boot time, run the following command:
[2025-06-26 12:37:53] systemctl enable scst.service
[2025-06-26 12:37:53]
[2025-06-26 12:37:53] make[1]: Leaving directory '/opt/ontap/ontap-mediator-1.9.1/ontap_mediator.Y5g7HJ/ontap-mediator-1.9.1/ontap-mediator-1.9.1/dist/scst/scst-3.8.0/scstadmin'
[2025-06-26 12:37:56] Saving SCST mod keys
[2025-06-26 12:37:56] Installing ONTAP Mediator server packages
[2025-06-26 12:37:56] Installing dist/rpm/ontap-mediator-*el8*
[2025-06-26 12:37:56] Preparing packages...
[2025-06-26 12:37:56] ontap-mediator-1.9.1-16.el8.noarch
[2025-06-26 12:38:25] Finalizing ontap_mediator module
[2025-06-26 12:38:25] Creating mediator service acount + group: netapp:netapp
[2025-06-26 12:38:25] Modifying ownership and permissions to mediator service acount + group: netapp:netapp
[2025-06-26 12:38:25] Configuring and setting ownership for mailbox_directory: /mnt/iscsi_space
[2025-06-26 12:38:26] Checking for pre-existing modification to pyenv/bin/uwsgi SElinux context...
[2025-06-26 12:38:26] No pre-existing modification found. Modifying context...
[2025-06-26 12:38:29] pyenv/bin/uwsgi context modified!
[2025-06-26 12:38:29] Adding Subject Alternative Names to the self-signed server certificate
[2025-06-26 12:38:29] #
[2025-06-26 12:38:29] # OpenSSL example configuration file.
[2025-06-26 12:38:29] Generating self-signed certificates
[2025-06-26 12:38:29] For root_ca.key:
[2025-06-26 12:38:29] Generating RSA private key, 4096 bit long modulus (2 primes)
[2025-06-26 12:38:30] .................................................................................................................++++
[2025-06-26 12:38:30] .........................................++++
[2025-06-26 12:38:30] e is 65537 (0x010001)
[2025-06-26 12:38:30] writing RSA key
[2025-06-26 12:38:30] For intermediate.key:
[2025-06-26 12:38:30] Generating RSA private key, 4096 bit long modulus (2 primes)
[2025-06-26 12:38:30] ..................++++
[2025-06-26 12:38:30] ..........................................................................................................................++++
[2025-06-26 12:38:30] e is 65537 (0x010001)
[2025-06-26 12:38:30] writing RSA key
What I've Already Tried
Fresh RHEL 8.2 Azure VM deployment (multiple times)
Installed all pre-req RPMs offline:
python39, python39-libs, python39-devel
python39-pip-wheel, python39-setuptools-wheel
chkconfig-1.19.2 (since alternatives was a missing dep)
Verified entropy is not the issue (e.g., /dev/random, getrandom() syscall monitored via strace)
SELinux was Enforcing (per getenforce); later changed it to Permissive
Still, the full installer hangs during the "writing RSA key" phase every time (see the sketch below for reproducing that step outside the installer)
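For anyone who wants to dig in, this is how I've been trying to narrow it down while the installer sits at "writing RSA key" (throwaway file name, nothing installer-specific):

time openssl genrsa -out /tmp/test_rsa.key 4096      # does a standalone 4096-bit keygen finish in a few seconds on this box?
cat /proc/sys/kernel/random/entropy_avail            # kernel entropy estimate while the installer is stuck
ps -ef --forest                                      # find which child process (openssl, python, ...) the installer is actually waiting on
rm -f /tmp/test_rsa.key                              # clean up the test key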
Questions for the Community
Has anyone else seen the RSA generation hang on RHEL 8.2 / Azure specifically?
Is there a known bug in Mediator 1.9.1 related to the cert gen script?
Could there be an issue with OpenSSL or Azure's VM entropy model, even though it's not /dev/random related? (see the sketch after this list)
Is there a fully manual install method that bypasses the RSA keygen part?
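On the entropy question specifically: I know RHEL 8's getrandom() shouldn't block once the kernel CRNG is initialized, but if anyone wants to rule that angle out completely on an Azure VM, a userspace entropy daemon is a cheap test (assuming rng-tools is available in your repos):

sudo dnf install -y rng-tools                 # provides the rngd userspace entropy daemon
sudo systemctl enable --now rngd
cat /proc/sys/kernel/random/entropy_avail     # should stay comfortably high with rngd running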
Thank you in advance.
I have a FAS8300 (no disks), a DS4246 shelf (24 x 200 GB SSD) and a DS460 shelf (60 x 4 TB SAS). These were all previously in use in other clusters and I'm trying to set them up as a new switchless cluster. At the boot menu, I chose option 4 ("Clean configuration and initialize all disks"). However, ONTAP failed to boot, saying the following:

BOOTMGR: The system has 2 disks assigned whereas it needs 3 to boot, will try to assign the required number.
May 30 19:20:55 [localhost:raid.autoPart.disabled:ALERT]: Disk auto-partitioning is disabled on this system: the system needs a minimum of 8 usable hard disks.
BOOTMGR: already_assigned=2, min_to_boot=3, num_assigned=0
May 30 19:20:55 [localhost:callhome.raid.adp.disabled:notice]: Disk auto-partitioning is disabled on this system: ADP DISABLED.
May 30 19:20:55 [localhost:diskown.split.shelf.assignStatus:notice]: Split-shelf based automatic drive assignment is "disabled".
May 30 19:20:55 [localhost:cf.fm.noMBDisksOrIc:error]: Could not find the local mailbox disks. Could not determine the firmware state of the partner through the HA interconnect. .
Terminated

Looking at the connections, I see link lights between the two shelves, but I don't see link lights on either shelf on the ports connected to the FAS8300. I did the cabling per this setup guide: https://docs.netapp.com/us-en/ontap-systems/media/PDF/215-14512_2021-02_en-us_FAS8300orFAS8700_ISI.pdf

Controller to shelf:
Node 1, port 0a --> DS4246, IOM A, square port
Node 2, port 0a --> DS4246, IOM B, square port
Node 1, port 0d --> DS460, IOM B, port 3
Node 2, port 0d --> DS460, IOM A, port 3

Shelf to shelf:
DS4246, IOM A, circle port --> DS460, IOM A, port 1
DS4246, IOM B, circle port --> DS460, IOM B, port 1

Any ideas what I'm doing wrong here?
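One more data point I can gather: to confirm whether the controllers can see the shelves at all, my next step is to boot each node to Maintenance mode (boot menu option 5) and check the SAS side from there. Rough sketch only; exact output and options vary a bit by release:

*> sysconfig -a        (lists the SAS adapters 0a/0d and any shelves and disks attached to them)
*> disk show -v        (every disk visible to this node, including unowned ones)
*> disk assign all     (only if disks turn out to be visible but simply unowned)

Given there are no link lights on the shelf ports facing the FAS8300, though, I suspect these will show nothing on 0a/0d.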
Hi Everyone,

I've been told that we have an SVM that is being used for Active Directory domain tunnelling. When I run the command "security login domain-tunnel show", it shows me that it is indeed enabled and assigned to one of our SVMs.

Can anyone tell me what this is actually doing? Looking at the SVM, it only has a few shares on it. So is this for the entire cluster, or, because it's on a specific SVM, does it only apply to whatever is on that SVM?

From this article - security login domain-tunnel create - I can't work out whether it works as a global thing or just for access to that SVM.

Thanks in advance if anyone can clear this up.
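In case it's useful, this is what I've looked at so far to see which accounts actually depend on the tunnel (output formats vary a bit by ONTAP version):

security login domain-tunnel show     (shows which data SVM is acting as the tunnel)
security login show                   (look for cluster/SVM admin accounts whose authentication method is "domain" - those are the logins that rely on the tunnel)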
I'm testing Snapshot locking/SnapLock and I'm seeing behavior that I didn't expect. I have a Snapshot policy configured to take hourly Snapshot copies with Maximum Snapshot copies = 4 and SnapLock retention period = 4. I always have 5 hourly Snapshot copies, though, instead of 4. This screen grab was taken at 2:25 PM; I don't expect the Snapshot copy from 2:05 PM to still exist.

I'm pretty sure this is because the SnapLock retention period doesn't expire in time for the oldest copy to be deleted; it must be a matter of seconds. On a volume with a large amount of data and/or a lot of change between Snapshot copies, that extra copy could end up consuming a lot of storage.

I just want to ask whether others have experienced this, whether it's expected, and whether there's a workaround. I have a support case open for this; the TSE did not have an immediate answer but will look into it.
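In case anyone wants to reproduce it, the timing is easy to see by listing the copies' create times next to their SnapLock expiry times. Sketch only; the field names below are what I see on a recent release, so check them against your version, and substitute your own volume name:

volume snapshot show -volume <vol_name> -fields create-time,snaplock-expiry-time

If the oldest copy's expiry time is only a few seconds after the moment the newest copy is created, the scheduled cleanup can't delete it yet, and the fifth copy hangs around until the next run.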